Although representation learning has been central to the rise of machine learning and artificial intelligence, a key problem remains: making the learned representations meaningful. For this, the typical approach is to regularize the learned representation through a prior probability distribution. However, such priors are usually unavailable or ad hoc. To address this, we propose a dynamics-constrained representation learning framework. Instead of using predefined probabilities, we restrict the latent representation to follow specific dynamics, which is a more natural constraint for representation learning in dynamical systems. Our belief stems from a fundamental observation in physics: although different systems can have different marginalized probability distributions, they typically obey the same dynamics, such as Newton's and Schrödinger's equations. We validate our framework on different systems, including a real-world fluorescent DNA movie dataset. We show that our algorithm can uniquely identify uncorrelated, isometric, and meaningful latent representations.
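The abstract does not spell out the training objective; as a hedged sketch, one way to realize a dynamics constraint is an autoencoder whose latent codes for consecutive frames are penalized for violating a discretized Newtonian update. The architecture, loss, and position/velocity split below are all assumptions for illustration, not the paper's method:

```python
# A minimal sketch of dynamics-constrained representation learning.
# Assumptions (not from the abstract): an autoencoder over frame pairs and a
# latent-dynamics penalty enforcing the discretized Newtonian update
# z_{t+1} = z_t + v_t * dt; the paper's actual constraint may differ.
import torch
import torch.nn as nn

class DynamicsConstrainedAE(nn.Module):
    def __init__(self, x_dim: int, z_dim: int, dt: float = 0.1):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
        self.dt = dt

    def forward(self, x_t, x_next):
        # Encode each frame into a latent position z and velocity v.
        z_t, v_t = self.enc(x_t).chunk(2, dim=-1)
        z_next, _ = self.enc(x_next).chunk(2, dim=-1)
        # Reconstruction term: the latent code must still describe the data.
        loss_rec = (self.dec(z_t) - x_t).pow(2).mean()
        # Dynamics term: consecutive latents must obey z_{t+1} = z_t + v_t * dt.
        loss_dyn = (z_next - (z_t + v_t * self.dt)).pow(2).mean()
        return loss_rec + loss_dyn
```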
Over the past few years, various types of data-driven artificial intelligence (AI) techniques have been widely adopted across scientific disciplines to produce predictive black-box models. However, because of their black-box nature, it is crucial to establish trust in these models before accepting them. One way to achieve this is to implement a post-hoc interpretation scheme that can lay out the reasons behind a black-box model's predictions. In this work, we propose a classical thermodynamics-inspired approach for this purpose: Thermodynamically Explainable Representations of AI and other black-box Paradigms (TERP). TERP works by constructing a linear, local surrogate model that approximates the behavior of the black-box model within a small neighborhood around the instance being explained. By employing a simple forward feature selection Monte Carlo algorithm, TERP assigns an interpretability free energy score to all possible surrogate models in order to select the optimal explanation. In addition, we validate TERP as a generally applicable method by successfully interpreting four different classes of black-box models trained on datasets from relevant domains, including classifying images, predicting heart disease, and classifying biomolecular conformations.
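A hedged sketch of the surrogate-selection loop described above. The neighborhood sampling, the greedy deterministic selection (standing in for the paper's Monte Carlo search), and the complexity penalty playing the role of the free energy score are all assumptions:

```python
# A TERP-style local surrogate sketch (details assumed, not from the paper):
# perturb the instance, fit linear surrogates on growing feature subsets,
# and score each by a free-energy-like trade-off between surrogate error
# ("energy") and the number of features used ("entropy").
import numpy as np
from sklearn.linear_model import LinearRegression

def terp_like_explanation(black_box, x, n_samples=500, noise=0.1, temp=1e-3):
    rng = np.random.default_rng(0)
    X = x + noise * rng.standard_normal((n_samples, x.size))  # local neighborhood
    y = black_box(X)                                          # black-box predictions
    selected, remaining, best = [], list(range(x.size)), None
    while remaining:
        # Greedy forward step: try adding each remaining feature.
        trials = []
        for j in remaining:
            feats = selected + [j]
            fit = LinearRegression().fit(X[:, feats], y)
            err = np.mean((fit.predict(X[:, feats]) - y) ** 2)
            # Free-energy-like score: fit error penalized by model complexity.
            trials.append((err + temp * len(feats), j))
        score, j = min(trials)
        if best is not None and score >= best[0]:
            break                                             # no improvement: stop
        selected.append(j); remaining.remove(j); best = (score, list(selected))
    return best  # (score, chosen feature indices)

# Toy usage: the surrogate recovers the two informative features (0 and 2).
print(terp_like_explanation(lambda X: X[:, 0] + 0.5 * X[:, 2], np.ones(5)))
```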
The rapid growth of machine translation (MT) systems has necessitated comprehensive studies to meta-evaluate evaluation metrics being used, which enables a better selection of metrics that best reflect MT quality. Unfortunately, most of the research focuses on high-resource languages, mainly English, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from English, and to date, there has not been a systematic study of evaluating MT systems from English into Indian languages. In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics. Our results show that pre-trained metrics, such as COMET, have the highest correlations with annotator scores. Additionally, we find that the metrics do not adequately capture fluency-based errors in Indian languages, and there is a need to develop metrics focused on Indian languages. We hope that our dataset and analysis will help promote further research in this area.
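The correlation step itself is standard; a minimal sketch, assuming per-segment scores and the usual Pearson/Kendall variants (the paper may use other correlation variants or aggregation levels):

```python
# Meta-evaluation sketch: correlate human MQM scores with automatic metric
# scores over the same segments. Variable names are illustrative.
import numpy as np
from scipy.stats import kendalltau, pearsonr

def meta_evaluate(human_scores, metric_scores):
    """Both arguments: 1-D arrays of per-segment scores in the same order."""
    human = np.asarray(human_scores, dtype=float)
    metric = np.asarray(metric_scores, dtype=float)
    return {
        "pearson": pearsonr(human, metric)[0],    # linear agreement
        "kendall": kendalltau(human, metric)[0],  # ranking agreement
    }
```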
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. For 9 out of the 11 languages, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location, and Organization). The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from an English sentence to the corresponding Indian language sentence. We also create manually annotated testsets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing testsets and the Naamapadam-test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam-test set compared to an mBERT model fine-tuned on existing datasets. IndicNER achieves an F1 score of more than 80 for 7 out of 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
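A hedged sketch of the annotation-projection idea described above: given word alignments between an English sentence and its Indian-language translation, copy each English entity span onto the aligned target words. The alignment input and BIO handling are assumptions, not the paper's exact procedure:

```python
# Project BIO entity tags from English words onto aligned target words.
def project_entities(en_tags, alignments, n_target_words):
    """en_tags: BIO tag per English word, e.g. ["B-PER", "O", ...].
    alignments: (english_index, target_index) word-alignment pairs."""
    tgt_tags = ["O"] * n_target_words
    for en_i, tgt_i in sorted(alignments, key=lambda p: p[1]):
        tag = en_tags[en_i]
        if tag == "O":
            continue
        label = tag.split("-", 1)[1]
        # Start a new span unless the previous target word continues it.
        prev = tgt_tags[tgt_i - 1] if tgt_i > 0 else "O"
        tgt_tags[tgt_i] = ("I-" if prev.endswith(label) else "B-") + label
    return tgt_tags

# Example: "John lives in Delhi" projected onto a 4-word translation.
print(project_entities(["B-PER", "O", "O", "B-LOC"],
                       [(0, 0), (3, 1), (2, 2), (1, 3)], 4))
# -> ['B-PER', 'B-LOC', 'O', 'O']
```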
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
Reflections on glossy objects contain valuable and hidden information about the surrounding environment. By converting these objects into cameras, we can unlock exciting applications, including imaging beyond the camera's field-of-view and from seemingly impossible vantage points, e.g. from reflections on the human eye. However, this task is challenging because reflections depend jointly on object geometry, material properties, the 3D environment, and the observer viewing direction. Our approach converts glossy objects with unknown geometry into radiance-field cameras to image the world from the object's perspective. Our key insight is to convert the object surface into a virtual sensor that captures cast reflections as a 2D projection of the 5D environment radiance field visible to the object. We show that recovering the environment radiance fields enables depth and radiance estimation from the object to its surroundings in addition to beyond field-of-view novel-view synthesis, i.e. rendering of novel views that are only directly-visible to the glossy object present in the scene, but not the observer. Moreover, using the radiance field we can image around occluders caused by close-by objects in the scene. Our method is trained end-to-end on multi-view images of the object and jointly estimates object geometry, diffuse radiance, and the 5D environment radiance field.
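A minimal sketch of the core geometric step described above: treating a glossy surface point as a virtual pixel by reflecting the viewing ray about the surface normal and querying an environment field along that reflected ray. The MLP below is a placeholder, not the paper's 5D field parameterization, and the full pipeline (geometry estimation, diffuse radiance) is not shown:

```python
# Virtual-sensor sketch: surface points become pixels whose colors come
# from an environment radiance field queried along reflected rays.
import torch

def reflect(view_dir, normal):
    # r = d - 2 (d . n) n, with d pointing from camera to surface.
    return view_dir - 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal

def virtual_sensor_color(env_field, surf_pts, view_dirs, normals):
    refl = torch.nn.functional.normalize(reflect(view_dirs, normals), dim=-1)
    # Query: ray origin (3D) plus unit reflected direction (2 degrees of
    # freedom), i.e. the 5D radiance field visible to the object.
    return env_field(torch.cat([surf_pts, refl], dim=-1))  # -> RGB per point

env_field = torch.nn.Sequential(torch.nn.Linear(6, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 3))    # placeholder field
pts = torch.randn(8, 3)
print(virtual_sensor_color(env_field, pts, torch.randn(8, 3),
                           torch.nn.functional.normalize(pts, dim=-1)).shape)
```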
Deep learning based text-to-speech (TTS) systems have been evolving rapidly with advances in model architectures, training methodologies, and generalization across speakers and languages. However, these advances have not been thoroughly investigated for Indian language speech synthesis. Such investigation is computationally expensive given the number and diversity of Indian languages, relatively lower resource availability, and the diverse set of advances in neural TTS that remain untested. In this paper, we evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for Dravidian and Indo-Aryan languages. Based on this, we identify monolingual models with FastPitch and HiFi-GAN V1, trained jointly on male and female speakers to perform the best. With this setup, we train and evaluate TTS models for 13 languages and find our models to significantly improve upon existing models in all languages as measured by mean opinion scores. We open-source all models on the Bhashini platform.
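A shape-level sketch of the selected two-stage pipeline: an acoustic model (FastPitch in the paper) maps text to a mel-spectrogram, and a vocoder (HiFi-GAN V1) maps the mel-spectrogram to a waveform. The modules below are stand-ins, kept only to show how text, mel-spectrogram, and waveform connect; they are not the released models:

```python
# Stand-in two-stage TTS pipeline with shape-correct stub modules.
import torch
import torch.nn as nn

class StubAcousticModel(nn.Module):           # stands in for FastPitch
    def __init__(self, vocab=256, mel_bins=80):
        super().__init__()
        self.emb = nn.Embedding(vocab, mel_bins)
    def forward(self, token_ids):             # (T,) -> (mel_bins, T)
        return self.emb(token_ids).T

class StubVocoder(nn.Module):                 # stands in for HiFi-GAN V1
    def __init__(self, mel_bins=80, hop=256):
        super().__init__()
        self.proj = nn.Linear(mel_bins, hop)
    def forward(self, mel):                   # (mel_bins, T) -> (T * hop,)
        return self.proj(mel.T).reshape(-1)

tokens = torch.randint(0, 256, (32,))         # a "sentence" of 32 token ids
audio = StubVocoder()(StubAcousticModel()(tokens))
print(audio.shape)                            # torch.Size([8192])
```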
Vision transformers are increasingly embedded in industrial systems owing to their superior performance, but their memory and power requirements make deploying them on edge devices a challenging task. Model compression techniques are therefore now widely used to deploy models on edge devices, as they reduce resource requirements and make model inference fast and efficient. However, from a security standpoint, their reliability and robustness are another major concern in safety-critical applications. Adversarial attacks are like optical illusions for ML algorithms, and they can severely affect the accuracy and reliability of models. In this work, we study the transferability of adversarial samples across SOTA vision transformer models and 3 SOTA compressed versions of them, and infer the effects that different compression techniques have on adversarial attacks.
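A minimal sketch of such a transferability measurement, with FGSM used here as a stand-in for whichever attacks the study actually evaluates:

```python
# Craft adversarial examples on a source model and measure how often they
# also fool each compressed variant of it.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=4 / 255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, clamped back to valid image range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def fooling_rate(model, x_adv, y):
    return (model(x_adv).argmax(1) != y).float().mean().item()

def transfer_matrix(source, compressed_models, x, y):
    """compressed_models: dict mapping a name to a classifier module."""
    x_adv = fgsm(source, x, y)
    return {name: fooling_rate(m, x_adv, y)
            for name, m in compressed_models.items()}
```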
Physical systems whose dynamics are governed by partial differential equations (PDEs) find applications in many fields, from engineering design to weather forecasting. Obtaining solutions to such PDEs is computationally expensive for large-scale and parameterized problems. In this work, deep learning techniques developed for time-series forecasting, such as LSTMs and TCNs, or for spatial feature extraction, such as CNNs, are used to model the system dynamics of advection-dominated problems. These models take as input a sequence of high-fidelity vector solutions at consecutive time steps obtained from the PDEs and forecast the solutions at subsequent time steps via auto-regression, thereby reducing the computation time and power required to obtain such high-fidelity solutions. The models are tested on numerical benchmarks (the 1D Burgers' equation and Stoker's dam-break problem) to assess long-term prediction accuracy, even outside the training domain (extrapolation). Before being fed to the forecasting models, the high-fidelity snapshots are compressed using non-intrusive reduced-order modeling techniques, such as deep auto-encoder networks, to reduce the complexity and the computations required in the online and offline stages. Deep ensembles are employed for uncertainty quantification of the forecasting models, providing information about the variance in the predictions caused by epistemic uncertainties.
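A hedged sketch of this pipeline, where a linear autoencoder and a single LSTM stand in for the architectures studied and the dimensions are arbitrary:

```python
# Latent-space surrogate: compress PDE snapshots, then roll an LSTM
# forward auto-regressively to forecast future solution fields.
import torch
import torch.nn as nn

class LatentForecaster(nn.Module):
    def __init__(self, full_dim=1024, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(full_dim, latent_dim)   # snapshot compression
        self.dec = nn.Linear(latent_dim, full_dim)   # back to full field
        self.rnn = nn.LSTM(latent_dim, latent_dim, batch_first=True)

    def rollout(self, snapshots, n_future):
        """snapshots: (1, T, full_dim) past solutions; returns n_future fields."""
        z = self.enc(snapshots)
        preds = []
        for _ in range(n_future):
            out, _ = self.rnn(z)
            z_next = out[:, -1:, :]                  # predicted next latent state
            preds.append(self.dec(z_next))
            z = torch.cat([z, z_next], dim=1)        # auto-regressive feedback
        return torch.cat(preds, dim=1)               # (1, n_future, full_dim)

model = LatentForecaster()
future = model.rollout(torch.randn(1, 10, 1024), n_future=5)
print(future.shape)                                  # torch.Size([1, 5, 1024])
```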
End-to-end (E2E) models have become the default choice for state-of-the-art speech recognition systems. Such models are trained on large amounts of labelled data, which is often not available for low-resource languages. Techniques such as self-supervised learning and transfer learning hold promise but have not yet been effective in training accurate models. On the other hand, collecting labelled datasets across a diverse set of domains and speakers is very expensive. In this work, we demonstrate an inexpensive and effective alternative to these approaches by ``mining'' text and audio pairs for Indian languages from public sources, specifically from the public archives of All India Radio. As a key component, we adapt the Needleman-Wunsch algorithm to align sentences with corresponding audio segments given a long audio and a PDF of its transcript, while being robust to errors due to OCR, extraneous text, and non-transcribed speech. We thus create Shrutilipi, a dataset comprising over 6,400 hours of labelled audio across 12 Indian languages, totalling 4.95M sentences. On average, Shrutilipi results in a 2.3x increase over publicly available labelled data. We establish the quality of Shrutilipi with 21 human evaluators across the 12 languages. We also establish the diversity of Shrutilipi in terms of the regions represented, the speakers, and the named entities mentioned. Notably, we show that adding Shrutilipi to the training set of Wav2Vec models leads to an average 5.8% drop in WER across 7 languages on the IndicSUPERB benchmark. For Hindi, which has the most benchmarks (7), the average WER drops from 18.8% to 13.5%. This improvement extends to efficient models: for a Conformer model (10x smaller than Wav2Vec), we show a 2.3% drop in WER. Finally, we demonstrate the diversity of Shrutilipi by showing that models trained on it are more robust to noisy input.
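The alignment core is classic Needleman-Wunsch; a minimal sketch follows. The token-level similarity and gap penalties are illustrative, and matching the transcript text against an ASR pass over the audio is an assumption here, not the paper's stated procedure:

```python
# Global alignment of two token sequences with gap penalties; gaps absorb
# OCR errors, extraneous text, and non-transcribed speech.
def needleman_wunsch(a, b, match=lambda x, y: 1 if x == y else -1, gap=-1):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(score[i-1][j-1] + match(a[i-1], b[j-1]),  # (mis)match
                              score[i-1][j] + gap,                      # skip a[i-1]
                              score[i][j-1] + gap)                      # skip b[j-1]
    # Traceback: recover the aligned pairs (None marks a gap).
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + match(a[i-1], b[j-1]):
            pairs.append((a[i-1], b[j-1])); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            pairs.append((a[i-1], None)); i -= 1
        else:
            pairs.append((None, b[j-1])); j -= 1
    return score[n][m], pairs[::-1]

# Toy example: an extra word in the transcript shows up as a gap.
print(needleman_wunsch("नमस्ते भारत रेडियो".split(), "नमस्ते रेडियो".split()))
```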